Wearable sensors for measuring head kinematics can be noisy due to imperfect interfaces with the body. Mouthguards are used to measure head kinematics during impacts in traumatic brain injury (TBI) studies, but deviations from reference kinematics can still occur due to potential looseness. In this study, deep learning is used to compensate for the imperfect interface and improve measurement accuracy. A set of one-dimensional convolutional neural network (1D-CNN) models was developed to denoise mouthguard kinematics measurements along three spatial axes of linear acceleration and angular velocity. The denoised kinematics had significantly reduced errors compared to reference kinematics, and reduced errors in brain injury criteria and tissue strain and strain rate calculated via finite element modeling. The 1D-CNN models were also tested on an on-field dataset of college football impacts and a post-mortem human subject dataset, with similar denoising effects observed. The models can be used to improve detection of head impacts and TBI risk evaluation, and potentially extended to other sensors measuring kinematics.
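The core idea of convolutional denoising can be illustrated with a minimal NumPy sketch. The fixed smoothing kernel below stands in for the learned per-axis filters of the 1D-CNN models; the signal, noise level, and kernel width are invented for illustration:

```python
import numpy as np

def conv1d_denoise(signal, kernel):
    """Denoise a 1-D kinematics trace with a single convolution
    (here a fixed smoothing kernel; the paper's models instead learn
    such kernels per axis from mouthguard/reference data pairs)."""
    pad = len(kernel) // 2
    padded = np.pad(signal, pad, mode="edge")   # keep 'same' output length
    return np.convolve(padded, kernel, mode="valid")

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 200)
clean = np.sin(2 * np.pi * 3 * t)            # stand-in reference kinematics
noisy = clean + 0.3 * rng.normal(size=200)   # imperfect-interface noise

kernel = np.ones(9) / 9.0                    # simple moving-average kernel
denoised = conv1d_denoise(noisy, kernel)

err_noisy = np.sqrt(np.mean((noisy - clean) ** 2))
err_denoised = np.sqrt(np.mean((denoised - clean) ** 2))
print(err_denoised < err_noisy)  # smoothing reduces RMS error vs. reference
```

A learned 1D-CNN generalizes this single fixed filter to stacks of trainable filters with nonlinearities, one model per kinematic axis.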
Measuring similarity between tasks is crucial in a variety of machine learning problems, including transfer, multi-task, continual, and meta-learning. Most current approaches to measuring task similarity are architecture-dependent: they either 1) rely on pre-trained models, or 2) train networks on the tasks and use forward transfer as a proxy for task similarity. In this paper, we leverage optimal transport theory and define a novel task embedding for supervised classification that is model-agnostic, training-free, and capable of handling (partially) disjoint label sets. In short, given a dataset with ground-truth labels, we perform a label embedding via multi-dimensional scaling and concatenate the dataset samples with their corresponding label embeddings. We then define the distance between two datasets as the 2-Wasserstein distance between their updated samples. Finally, we leverage the 2-Wasserstein embedding framework to embed tasks into a vector space in which the Euclidean distance between embedded points approximates the proposed 2-Wasserstein distance between tasks. We show that the proposed embedding leads to significantly faster comparison of tasks than related approaches such as the Optimal Transport Dataset Distance (OTDD). Furthermore, we demonstrate the effectiveness of the proposed embedding through various numerical experiments and show statistically significant correlations between our proposed distance and the forward and backward transfer between tasks.
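The distance at the heart of the method can be illustrated in one dimension, where the 2-Wasserstein distance between equal-size empirical distributions reduces to matching sorted samples. This sketch omits the MDS label embedding and the final vector-space embedding; the sample data are invented:

```python
import numpy as np

def w2_empirical_1d(x, y):
    """Exact 2-Wasserstein distance between two equal-size 1-D empirical
    distributions: sort both samples and match them in order."""
    x, y = np.sort(np.asarray(x, float)), np.sort(np.asarray(y, float))
    assert x.shape == y.shape, "equal-size samples required"
    return np.sqrt(np.mean((x - y) ** 2))

# Two samples drawn around different means: the distance should roughly
# reflect the shift between the underlying distributions.
rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, 500)
b = rng.normal(2.0, 1.0, 500)
print(w2_empirical_1d(a, a))  # 0.0 for identical samples
print(w2_empirical_1d(a, b))  # close to the mean shift of 2.0
```

In higher dimensions the matching becomes an optimal transport problem; the paper's vector-space embedding is designed precisely so that plain Euclidean distances approximate it.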
Human-Object Interaction (HOI) detection plays a crucial role in activity understanding. Though significant progress has been made, interactiveness learning remains a challenging problem in HOI detection: existing methods usually generate redundant negative H-O pair proposals and fail to effectively extract interactive pairs. Though interactiveness has been studied at both whole-body and part levels and facilitates H-O pairing, previous works only focus on the target person at a time (i.e., from a local perspective) and overlook the information of other persons. In this paper, we argue that comparing the body parts of multiple persons simultaneously affords more useful and supplementary interactiveness cues. That is, we learn body-part interactiveness from a global perspective: when classifying a target person's body-part interactiveness, visual cues are explored not only from herself/himself but also from other persons in the image. We construct body-part saliency maps based on self-attention to mine cross-person informative cues and learn the holistic relationships among all the body parts. We evaluate the proposed method on the widely-used benchmarks HICO-DET and V-COCO. With our new perspective, the holistic global-local body-part interactiveness learning achieves significant improvements over the state of the art. Our code is available at https://github.com/enlighten0707/Body-Part-Map-for-Interactiveness.
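The cross-person attention idea can be sketched with plain scaled dot-product self-attention over pooled body-part features. This is a simplified stand-in: the actual model builds spatial saliency maps inside a detection architecture, and all shapes here are invented:

```python
import numpy as np

def self_attention_saliency(part_features):
    """Self-attention over all body-part feature vectors in an image
    (across persons); the attention mass each part receives serves as
    a scalar saliency score for that part."""
    d = part_features.shape[1]
    scores = part_features @ part_features.T / np.sqrt(d)
    attn = np.exp(scores - scores.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)   # row-wise softmax
    return attn.mean(axis=0)                  # column mass = per-part saliency

rng = np.random.default_rng(5)
parts = rng.normal(size=(6, 16))  # e.g. 6 body parts from two persons
saliency = self_attention_saliency(parts)
print(saliency.shape)  # one saliency score per body part, summing to 1
```

Attending jointly over parts from all persons is what lets cues from one person inform the interactiveness of another's body parts.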
Leveraging machine learning to facilitate the optimization process is an emerging field that promises to bypass the fundamental computational bottleneck caused by classic iterative solvers in critical applications requiring near-real-time optimization. Most existing approaches focus on learning data-driven optimizers that require fewer iterations to solve an optimization problem. In this paper, we take a different approach and propose to replace the iterative solvers altogether with a trainable parametric set function that outputs the optimal arguments/parameters of an optimization problem in a single feed-forward pass. We denote our method as Learning to Optimize the Optimization Process (LOOP). We show the feasibility of learning such parametric functions for solving various classic optimization problems, including linear/nonlinear regression, principal component analysis, transport-based core-sets, and quadratic programming in supply management applications. In addition, we propose two alternative approaches for learning such parametric functions, with and without a solver in the loop. Finally, through various numerical experiments, we show that the trained solvers can be orders of magnitude faster than classic iterative solvers while providing near-optimal solutions.
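The amortization idea can be sketched on a toy family of least-squares problems: a single map, fit once on solved instances, replaces the iterative solver at test time. LOOP itself learns a set-input neural network; the linear map, fixed design matrix, and problem family below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

# Family of problems min_x ||A x - b||^2 with A fixed across instances;
# each instance supplies a new right-hand side b.
A = rng.normal(size=(20, 5))

def solve_iteratively(b, steps=2000, lr=0.02):
    """Classic iterative solver: plain gradient descent."""
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        x -= lr * A.T @ (A @ x - b)
    return x

# Amortization in the spirit of LOOP: fit one linear map from problem
# data b to solution x* on a training set of solved instances, then
# answer new instances in a single feed-forward (one matrix product).
B_train = rng.normal(size=(100, 20))
X_train = np.stack([np.linalg.lstsq(A, b, rcond=None)[0] for b in B_train])
W, *_ = np.linalg.lstsq(B_train, X_train, rcond=None)   # learned "solver"

b_new = rng.normal(size=20)
x_iter = solve_iteratively(b_new)        # thousands of iterations
x_amortized = b_new @ W                  # single feed-forward
print(np.allclose(x_iter, x_amortized, atol=1e-2))
```

Here the amortized solution costs one matrix-vector product regardless of how many iterations the classic solver needs, which is the source of the claimed orders-of-magnitude speedup.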
Learning from set-structured data is a fundamentally important problem with applications across machine learning and computer vision. This paper focuses on non-parametric and data-independent learning from set-structured data using approximate nearest neighbor (ANN) solutions, particularly locality-sensitive hashing. We consider the problem of set retrieval from an input set query. Such a retrieval problem requires: 1) an efficient mechanism to calculate the distance/dissimilarity between sets, and 2) an appropriate data structure for fast nearest-neighbor search. To that end, we propose the Sliced-Wasserstein set embedding as a computationally efficient "set-2-vector" mechanism that enables downstream ANN with theoretical guarantees. Set elements are treated as samples from an unknown underlying distribution, and the Sliced-Wasserstein distance is used to compare sets. We demonstrate the effectiveness of our algorithm, denoted Set Locality Sensitive Hashing (SLoSH), on various set retrieval datasets, and compare our proposed embedding with standard set embedding approaches, including Generalized Mean (GeM) embedding/pooling, Featurewise Sort Pooling (FSPool), and Covariance Pooling, showing consistent improvements in retrieval results. The code for replicating our results is available here: https://github.com/mint-vu/slosh.
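The "set-2-vector" mechanism can be sketched directly: project each set's elements onto a shared bank of random directions, sort the projections per direction, and concatenate. For equal-size sets, the Euclidean distance between such embeddings matches the (Monte-Carlo) sliced 2-Wasserstein distance up to a constant. Set sizes, dimensions, and the slice count below are invented:

```python
import numpy as np

def slosh_embed(points, directions):
    """Sliced-Wasserstein set embedding: project the set's elements onto
    each slicing direction, sort the projections, and concatenate the
    sorted columns into one fixed-length vector."""
    proj = points @ directions.T              # (n_points, n_slices)
    return np.sort(proj, axis=0).ravel(order="F")

rng = np.random.default_rng(1)
dirs = rng.normal(size=(32, 2))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)  # unit-norm slices

set_a = rng.normal(0, 1, size=(100, 2))
set_b = rng.normal(0, 1, size=(100, 2))   # same distribution as set_a
set_c = rng.normal(5, 1, size=(100, 2))   # shifted distribution

d_ab = np.linalg.norm(slosh_embed(set_a, dirs) - slosh_embed(set_b, dirs))
d_ac = np.linalg.norm(slosh_embed(set_a, dirs) - slosh_embed(set_c, dirs))
print(d_ab < d_ac)  # similar sets embed closer than dissimilar ones
```

Because the output is an ordinary fixed-length vector, standard locality-sensitive hashing can then be applied to it unchanged.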
While significant progress has been made in understanding hand-object interactions in computer vision, it remains very challenging for robots to perform complex dexterous manipulation. In this paper, we propose a new platform and pipeline, DexMV (Dexterous Manipulation from Videos), for imitation learning. We design a platform with: (i) a simulation system for complex dexterous manipulation tasks with a multi-finger robot hand and (ii) a computer vision system to record large-scale demonstrations of a human hand performing the same tasks. In our novel pipeline, we extract 3D hand and object poses from the videos and propose a novel demonstration translation method to convert human motion to robot demonstrations. We then apply multiple imitation learning algorithms with the demonstrations. We show that the demonstrations can indeed improve robot learning by a large margin and solve complex tasks that reinforcement learning alone cannot solve. Project page with videos: https://yzqin.github.io/dexmv
Objective: Traumatic brain injury can be caused by head impacts, but many brain injury risk estimation models are not equally accurate across the variety of impacts that patients may undergo, and the characteristics of different impact types are not well studied. We investigated the spectral characteristics of different head impact types with kinematics classification. Methods: Data were analyzed from 3,262 head impacts from lab reconstruction, American football, mixed martial arts, and publicly available car crash data. A random forest classifier with spectral densities of linear acceleration and angular velocity was built to classify head impact types (e.g., football, car crash, mixed martial arts). To test the classifier's robustness, another 271 lab-reconstructed impacts were obtained from 5 other instrumented mouthguards. Finally, with the classifier, type-specific, nearest-neighbor regression models were built for brain strain. Results: The classifier reached a median accuracy of 96% over 1,000 random partitions of training and test sets. The most important features in the classification included both low-frequency and high-frequency features, drawn from both linear acceleration and angular velocity. Different head impact types had different distributions of spectral densities across the low-frequency and high-frequency ranges (e.g., the spectral densities of MMA impacts were higher in the high-frequency range than in the low-frequency range). The type-specific regression showed a generally higher R^2-value than baseline models without classification. Conclusion: The machine-learning-based classifier enables a better understanding of impact kinematics spectral density in different sports, and it can be applied to evaluate the quality of impact-simulation systems and on-field data augmentation.
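The spectral-density features can be sketched with a plain FFT periodogram split into low- and high-frequency band power, a simplified stand-in for the features fed to the random forest. The sampling rate, band split, and test tones below are invented for illustration:

```python
import numpy as np

def band_power_features(signal, fs, split_hz=100.0):
    """Summarize a kinematics trace by its power spectral density,
    split into total low-band and high-band power."""
    n = len(signal)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / n
    low = psd[freqs < split_hz].sum()
    high = psd[freqs >= split_hz].sum()
    return low, high

fs = 1000.0                          # assumed 1 kHz sampling
t = np.arange(0, 0.1, 1.0 / fs)
slow = np.sin(2 * np.pi * 20 * t)    # 20 Hz tone -> low band
fast = np.sin(2 * np.pi * 300 * t)   # 300 Hz tone -> high band

low_s, high_s = band_power_features(slow, fs)
low_f, high_f = band_power_features(fast, fs)
print(low_s > high_s, high_f > low_f)  # each tone lands in its band
```

Stacking such band powers per axis for linear acceleration and angular velocity yields the kind of feature vector that separates, say, MMA impacts (high-frequency-heavy) from car crashes.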
Few-shot relation extraction (FSRE) aims at recognizing unseen relations by learning with merely a handful of annotated instances. To generalize to new relations more effectively, this paper proposes a novel pipeline for the FSRE task based on queRy-information guided Attention and adaptive Prototype fuSion, namely RAPS. Specifically, RAPS first derives the relation prototype by the query-information guided attention module, which exploits rich interactive information between the support instances and the query instances, in order to obtain more accurate initial prototype representations. Then RAPS elaborately combines the derived initial prototype with the relation information by the adaptive prototype fusion mechanism to get the integrated prototype for both training and prediction. Experiments on the benchmark dataset FewRel 1.0 show a significant improvement of our method over state-of-the-art methods.
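The query-guided prototype idea can be sketched with dot-product attention: weight the support instances by their similarity to the query before averaging, so the prototype adapts to the query instead of being a plain mean. This is a simplified sketch; RAPS additionally fuses the prototype with relation description information, which is omitted here, and all embeddings are invented:

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def query_guided_prototype(support, query):
    """Attention-weighted mean of support embeddings, with weights
    given by each support instance's dot-product similarity to the
    query (a toy version of query-information guided attention)."""
    weights = softmax(support @ query)
    return weights @ support

rng = np.random.default_rng(4)
support = rng.normal(size=(5, 8))              # 5 support embeddings
query = support[0] + 0.1 * rng.normal(size=8)  # query near instance 0

proto = query_guided_prototype(support, query)
plain = support.mean(axis=0)                   # attention-free prototype
# The guided prototype sits closer to the query-relevant instance.
print(np.linalg.norm(proto - support[0]) < np.linalg.norm(plain - support[0]))
```

The benefit is largest exactly in the few-shot regime, where a single noisy support instance can otherwise dominate a plain-mean prototype.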
Recent 3D-based manipulation methods either directly predict the grasp pose using 3D neural networks, or solve the grasp pose using similar objects retrieved from shape databases. However, the former faces generalizability challenges when tested with new robot arms or unseen objects, and the latter assumes that similar objects exist in the databases. We hypothesize that recent 3D modeling methods provide a path towards building a digital replica of the evaluation scene that affords physical simulation and supports robust manipulation algorithm learning. We propose to reconstruct high-quality meshes from real-world point clouds using a state-of-the-art neural surface reconstruction method (the Real2Sim step). Because most simulators take meshes for fast simulation, the reconstructed meshes enable grasp-pose label generation without human effort. The generated labels can train a grasp network that performs robustly in the real evaluation scene (the Sim2Real step). In synthetic and real experiments, we show that the Real2Sim2Real pipeline performs better than baseline grasp networks trained with a large dataset and a grasp sampling method with retrieval-based reconstruction. The benefit of the Real2Sim2Real pipeline comes from 1) decoupling scene modeling and grasp sampling into sub-problems, and 2) solving both sub-problems with sufficiently high quality using recent 3D learning algorithms and mesh-based physical simulation techniques.
Sleep stage recognition is crucial for assessing sleep and diagnosing chronic diseases. Deep learning models, such as Convolutional Neural Networks and Recurrent Neural Networks, are trained using grid data as input, which leaves them unable to learn relationships in non-Euclidean spaces. Graph-based deep models have been developed to address this issue by modeling the external relationships of electrode signals across different brain regions, but they cannot capture the internal relationships between segments of an electrode signal within a specific brain region. In this study, we propose a Pearson correlation-based graph attention network, called PearNet, as a solution to this problem. Graph nodes are generated based on the spatial-temporal features extracted by a hierarchical feature extraction method, and the graph structure is then learned adaptively to build node connections. In our experiments on the Sleep-EDF-20 and Sleep-EDF-78 datasets, PearNet performs better than the state-of-the-art baselines.
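The Pearson-correlation graph construction can be sketched in a few lines: compute pairwise correlations between node feature vectors and keep edges whose absolute correlation clears a threshold. This is a simplified stand-in for PearNet's adaptively learned structure, with the threshold and features invented:

```python
import numpy as np

def pearson_adjacency(node_features, threshold=0.5):
    """Build graph edges from pairwise Pearson correlations between
    node feature vectors, keeping |r| >= threshold (no self-loops)."""
    r = np.corrcoef(node_features)            # (n_nodes, n_nodes)
    adj = (np.abs(r) >= threshold).astype(float)
    np.fill_diagonal(adj, 0.0)
    return adj

rng = np.random.default_rng(2)
base = rng.normal(size=64)
nodes = np.stack([
    base,                                     # node 0
    base + 0.1 * rng.normal(size=64),         # node 1: correlated with 0
    rng.normal(size=64),                      # node 2: independent
])
adj = pearson_adjacency(nodes, threshold=0.8)
print(adj)  # nodes 0 and 1 connect; node 2 stays isolated
```

An attention network then operates on these data-driven connections rather than on a fixed electrode-layout graph.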